Comments on the article in Nature: ‘AI weapons: Russia’s war in Ukraine shows why the world must enact a ban’

The following is a discussion of this Comment in Nature, Vol. 614, 23 February 2023, by Stuart Russell.
To read the full text, follow this link: https://www.nature.com/articles/d41586-023-00511-5
See also the REAIM summit page of the Dutch Ministry of Foreign Affairs: https://www.rijksoverheid.nl/ministeries/ministerie-van-buitenlandse-zaken/evenementen/reaim
In the last paragraph I explain my own opinion.

Reflection


Introduction

But this simple ‘us versus them’ narrative obscures a disturbing trend — weapons are becoming ever smarter.
Yes, and in the long run that is a troubling fact for humanity.
That is pushing us ever closer to a dangerous world where lethal autonomous weapon systems are cheap and widely available tools for inflicting mass casualties — weapons of mass destruction found in every arms supermarket, for sale to any dictator, warlord or terrorist.
Yes. But the danger does not lie in autonomy alone. The biggest problem is the reliability of these systems. Semi-autonomous systems may be even more dangerous, and they too can be called weapons of mass destruction.
As a start, governments need to begin serious negotiations on a treaty to ban anti-personnel autonomous weapons, at the very least.
Governments should try to control all types of weaponry, but it seems a losing battle.
Perhaps a completely different strategy should be designed. First of all, there should be an objective that benefits the whole of humanity.
Professional societies in AI and robotics should develop and enforce codes of conduct outlawing work on lethal autonomous weapons.
Academic societies should take the lead.
And people the world over should understand that allowing algorithms to decide to kill humans is a terrible idea.
It should be understood that all algorithms are developed by humans. The same holds for testing.

1. Pressures leading to full autonomy

What exactly are ‘lethal autonomous weapons systems’?
This discussion should be handled in a broader context.
Current AI systems exhibit all the required capabilities — planning missions, navigating, 3D mapping, recognizing targets, flying through cities and buildings, and coordinating attacks. Lots of platforms are available.
This discussion should not be limited to AI systems, but should extend to all types of weaponry.
The road to full autonomy in the Russia–Ukraine conflict begins with various types of semi-autonomous weapon already in use.
The road to full autonomy in the Russia–Ukraine conflict should be bent towards the question of how we can prevent such conflicts in the first place. I understand that such a discussion is extremely difficult. The first step is that we should all agree that something has to change, and in which direction.
Whereas the Kargu and Harpy are ‘kamikaze’ weapons, the Chinese Ziyan Blowfish A3 is an autonomous helicopter equipped with a machine gun and several unguided gravity bombs.
True, but that does not make it acceptable.
All of these systems are described as having both autonomous and remotely operated modes, making it difficult to know whether any given attack was carried out by a human operator.
Any person who uses a weapon to reach a certain objective, where that weapon can kill or wound a human being, is responsible for its use. In this definition a weapon is not only a device but also includes all the people involved in operating that device. In many cases the person responsible is an officer or higher who gives the command to...; it is not the human operator.

2. Benefits and problems

Why are militaries pursuing machines that can decide for themselves whether to kill humans?
In that case the highest-ranking military officer is responsible for its use.
In any case, why would any military use such a system? There is always a chance that the people who deploy these systems are killed themselves, as a result of a decision made by the equipment.
My understanding is that any piece of military equipment requires a target in order to be used. This target has to be decided in advance, before the equipment is brought into action. That means the lethal decision is, in effect, taken by a human beforehand, and that human bears the responsibility.
But, unlike remotely operated weapons, autonomous weapons can function even when electronic communication is impossible because of jamming — and can react even faster than any weapon remotely controlled by a human.
That is 'interesting', but it can easily result in errors.
More seriously:
Another point often advanced is that, compared with other modes of warfare, the ability of lethal autonomous weapons to distinguish civilians from combatants might reduce collateral damage.
How are these systems tested? How do you know that they reduce collateral damage? That is a horrible exercise, prone to errors.
The United States, along with Russia, has been citing this supposed benefit with the effect of blocking multilateral negotiations at the Convention on Certain Conventional Weapons (CCW) in Geneva, Switzerland — talks that have occurred sporadically since 2014.
This type of reasoning is different from the ideology in the USA that guns don't kill, but their users do.

3. Political action at a standstill

4. A pragmatic way forward

5. Next steps


Reflection 1


Reflection 2




Created: 20 December 2022
